Metareasoning for Planning Under Uncertainty
The conventional model for online planning under uncertainty assumes that an
agent can stop and plan without incurring costs for the time spent planning.
However, planning time is not free in most real-world settings. For example, an
autonomous drone is subject to nature's forces, like gravity, even while it
thinks, and must either pay a price for counteracting these forces to stay in
place, or grapple with the state change caused by acquiescing to them. Policy
optimization in these settings requires metareasoning---a process that trades
off the cost of planning and the potential policy improvement that can be
achieved. We formalize and analyze the metareasoning problem for Markov
Decision Processes (MDPs). Our work subsumes previously studied special cases
of metareasoning and shows that in the general case, metareasoning is at most
polynomially harder than solving MDPs with any given algorithm that disregards
the cost of thinking. For reasons we discuss, optimal general metareasoning
turns out to be impractical, motivating approximations. We present approximate
metareasoning procedures which rely on special properties of the BRTDP planning
algorithm and explore the effectiveness of our methods on a variety of
problems.
Comment: Extended version of IJCAI 2015 paper.
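To make the think-versus-act tradeoff concrete, here is a minimal, self-contained Python sketch. The geometric bound-gap shrinkage and all constants are illustrative assumptions standing in for the upper and lower value bounds a planner like BRTDP maintains; this is not the paper's actual procedure.

```python
# Toy model of metareasoning: keep planning while the expected improvement
# from one more step of thinking exceeds the cost of that step. The
# geometric gap-shrinkage model and all constants here are assumptions.

def plan_until_worthless(gap, shrink=0.5, think_cost=0.05):
    """Return how many planning steps pay for themselves, given a
    value-bound gap that shrinks by `shrink` per step of deliberation."""
    steps = 0
    while True:
        expected_improvement = gap * (1 - shrink)  # gap removed by one more step
        if expected_improvement <= think_cost:
            return steps   # further deliberation no longer pays for itself
        gap *= shrink      # one more planning step tightens the bounds
        steps += 1

if __name__ == "__main__":
    # A bound gap of 1.0 that halves per step: planning stops after the
    # fourth step, once halving the gap is worth less than the 0.05 cost.
    print(plan_until_worthless(gap=1.0))  # -> 4
```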
Improving Offline RL by Blending Heuristics
We propose Heuristic Blending (HUBL), a simple performance-improving
technique for a broad class of offline RL algorithms based on value
bootstrapping. HUBL modifies Bellman operators used in these algorithms,
partially replacing the bootstrapped values with Monte-Carlo returns as
heuristics. For trajectories with higher returns, HUBL relies more on
heuristics and less on bootstrapping; otherwise, it leans more heavily on
bootstrapping. We show that this idea can be easily implemented by relabeling
the offline datasets with adjusted rewards and discount factors, making HUBL
readily usable by many existing offline RL implementations. We theoretically
prove that HUBL reduces offline RL's complexity and thus improves its
finite-sample performance. Furthermore, we empirically demonstrate that HUBL
consistently improves the policy quality of four state-of-the-art
bootstrapping-based offline RL algorithms (ATAC, CQL, TD3+BC, and IQL), by 9%
on average over 27 datasets of the D4RL and Meta-World benchmarks.
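The relabeling idea lends itself to a short sketch. The Python below shows one way to realize a blended backup target of the form r + gamma * (lam * h(s') + (1 - lam) * V(s')) purely by rewriting the dataset, where h is the Monte-Carlo return-to-go; the constant `lam` stands in for HUBL's return-dependent blending factor, which the paper computes per trajectory, so treat this as an assumption-laden illustration rather than the authors' implementation.

```python
import numpy as np

def hubl_relabel(rewards, gamma, lam):
    """Relabel one trajectory so that a standard bootstrapped backup on
    the new data computes the blended target
        r + gamma * (lam * h(s') + (1 - lam) * V(s')),
    where h is the Monte-Carlo return-to-go used as a heuristic."""
    rewards = np.asarray(rewards, dtype=float)
    h = np.zeros(len(rewards) + 1)               # h[t] = return-to-go from step t
    for t in reversed(range(len(rewards))):
        h[t] = rewards[t] + gamma * h[t + 1]
    new_rewards = rewards + gamma * lam * h[1:]  # fold the heuristic into r
    new_gamma = gamma * (1.0 - lam)              # shrink the bootstrap's weight
    return new_rewards, new_gamma

# Usage: relabel once, then run any value-bootstrapping offline RL
# algorithm unchanged on (new_rewards, new_gamma).
r_tilde, g_tilde = hubl_relabel([0.0, 0.0, 1.0], gamma=0.99, lam=0.3)
```

Because only the dataset changes, the blended backup drops into existing implementations of algorithms such as ATAC, CQL, TD3+BC, or IQL, which is what makes the technique broadly applicable.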
Safe Reinforcement Learning via Curriculum Induction
In safety-critical applications, autonomous agents may need to learn in an
environment where mistakes can be very costly. In such settings, the agent
needs to behave safely not only after but also while learning. To achieve this,
existing safe reinforcement learning methods make an agent rely on priors that
let it avoid dangerous situations during exploration with high probability, but
both the probabilistic guarantees and the smoothness assumptions inherent in
the priors are not viable in many scenarios of interest such as autonomous
driving. This paper presents an alternative approach inspired by human
teaching, where an agent learns under the supervision of an automatic
instructor that saves the agent from violating constraints during learning. In
this model, we introduce a monitor that needs to know neither how to do well at
the task the agent is learning nor how the environment works.
Instead, it has a library of reset controllers that it activates when the agent
starts behaving dangerously, preventing it from doing damage. Crucially, the
choices of which reset controller to apply in which situation affect the speed
of agent learning. Based on observing agents' progress, the teacher itself
learns a policy for choosing the reset controllers, a curriculum, to optimize
the agent's final policy reward. Our experiments use this framework in two
environments to induce curricula for safe and efficient learning.
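As a concrete illustration of the teacher-monitor protocol, the toy Python below puts a Q-learning student on a ten-cell corridor with a cliff at one end; the monitor intervenes with one of two hypothetical reset controllers ("soft" or "hard"), and a simple bandit teacher learns which controller lets the student finish episodes fastest. Everything here, from the environment to the bandit update, is an assumption for exposition; the paper's teacher and benchmark environments are richer.

```python
import random

random.seed(0)

def student_episode(controller, q, eps=0.2):
    """One Q-learning episode on a corridor: cells 0..9, cliff at 0,
    goal at 9. The monitor applies the teacher-chosen reset controller
    whenever the student is about to fall off the cliff."""
    s, steps = 5, 0
    while s != 9 and steps < 200:
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s2 = s + a
        if s2 == 0:                                 # dangerous: monitor steps in
            s2 = 2 if controller == "soft" else 5   # soft nudge vs. full restart
            r = 0.0                                 # saved, so no catastrophe
        else:
            r = 1.0 if s2 == 9 else -0.01
        target = r + (0.0 if s2 == 9 else 0.95 * max(q[(s2, -1)], q[(s2, 1)]))
        q[(s, a)] += 0.5 * (target - q[(s, a)])
        s, steps = s2, steps + 1
    return steps

q = {(s, a): 0.0 for s in range(10) for a in (-1, 1)}
avg_steps = {"soft": 100.0, "hard": 100.0}   # teacher's estimate per controller
for _ in range(200):
    if random.random() < 0.3:                # explore the controller library
        c = random.choice(["soft", "hard"])
    else:                                    # exploit the fastest-learning choice
        c = min(avg_steps, key=avg_steps.get)
    avg_steps[c] += 0.1 * (student_episode(c, q) - avg_steps[c])
print(avg_steps)  # the curriculum: which intervention the teacher prefers
```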